The new GPUs are here, and raytracing is around the corner. A lot of people are talking about it, and it is becoming quite common in games released nowadays.
In this tutorial, we will learn how to leverage the API to get a raytracing effect running on the hardware. If you have a compatible GPU, you will be able to follow along and design your own raytraced effects with nkGraphics !
This tutorial will present the theory behind raytracing and the architecture graphics APIs require to support it. It will then dig into how nkGraphics exposes these capabilities and how we can leverage them to get raytracing in our application !
When enabling raytracing effects, the best thing to do first is to check for hardware support. While raytracing capabilities are slowly spreading across recently released GPUs, they are not present in every computer yet.
To check support, we first need to access the Renderer and query it about its supported functionalities. The Renderer is accessible through the GraphicsSystem, itself available from the MainSystem. Let's first include the relevant headers :
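The snippet below is a hedged sketch : the exact header names and paths are assumptions and may differ in your nkGraphics version.

```cpp
// Illustrative includes only : header layout is an assumption,
// check your nkGraphics distribution for the real paths.
#include <NilkinsGraphics/Systems/MainSystem.h>
#include <NilkinsGraphics/Systems/GraphicsSystem.h>
#include <NilkinsGraphics/Renderers/Renderer.h>
```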
Which we can now use to query support information :
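The following is a hedged sketch : the accessor and method names are assumptions made for illustration, not the definitive nkGraphics API.

```cpp
// Hypothetical sketch : accessor and method names are assumptions.
bool checkRaytracingSupport(nkGraphics::MainSystem* mainSystem)
{
    // The Renderer is reached through the GraphicsSystem, held by the MainSystem
    nkGraphics::GraphicsSystem* graphicsSystem = mainSystem->getGraphicsSystem();
    nkGraphics::Renderer* renderer = graphicsSystem->getRenderer();

    // Ask the renderer whether the hardware / API combination supports raytracing
    return renderer->isRaytracingSupported();
}
```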
This will query the renderer for what it supports. Note that for raytracing to be available, you need an RTX-enabled card and API, meaning a GPU with hardware raytracing support, driven through an API exposing it (DirectX Raytracing, in our case).
If the flag returned is true, then this means you can use the raytracing capabilities of the hardware.
Raytracing support is fairly recent and brings new concepts to the graphics APIs. Here, we will detail a bit of the theory behind raytracing and how it has been translated into function calls.
Raytracing is a rendering technique, often opposed to rasterization, in which rays are shot into the virtual scene. The aim is to mimic what light does in the real world : travel in straight lines, hit and bounce off whatever is on the path. This is how we can see things around us. Our eyes are sensors into which light travels after bouncing around our surroundings, and the light rays are then converted into a signal our brain can compile into an image.
Raytracing in rendering basically tries to do the same thing, although the path of light is generally worked out backwards. Our virtual camera shoots rays, often organized per pixel, and tries to find out what light arrives along them. Getting that information means tracing the ray back, computing how it interacts with the materials in the scene and how it bounces around.
This often means a lot of computing, as a scene is not traced by only one ray. Light bounces from everywhere to everywhere, being reflected, refracted, scattered, and what not. This is why rasterization has always been used for real-time applications up till now : raytracing is very expensive. Rendering a complex scene with satisfying quality can take an enormous amount of time. It is said that a single image from a Pixar movie could take months to render on an average user machine.
However, the quality of raytracing is unbeatable. Rasterizing an image always relies on complex tricks and precomputing to reach good quality, with a lot of limitations. Light probes, shadow maps, screen space ambient occlusion and reflections : they all have limitations, by willingly ignoring a lot of information. In theory, raytracing does not suffer from those problems : everything is resolved in the live environment at runtime. No precomputing, no loss of information. Everything is available as you generate the images.
Raytracing being the holy grail of rendering, it is only natural that effort is put into bringing it to the real-time world. But even with that in mind, Nvidia announcing hardware support was a breakthrough and a big step towards the future. And now, we are starting to get real-time raytracing in real games !
More information on this topic can be found on Nvidia's website, and in its dedicated essentials series.
Let's see how that works under the hood. We have two big concepts to remember :
- The acceleration structure, used to traverse the scene efficiently
- The shader tables, describing what should happen when rays hit geometry
From there, let's jump to the first topic : the acceleration structure for scene traversal.
To trace the scene efficiently, an acceleration structure has to be used. It is represented by a BVH (Bounding Volume Hierarchy), in which geometry is reorganized into a tree a ray can use to progressively refine where a hit occurs. We won't go into the details here, but some good information can be found here or here.
Within graphics APIs, the BVH has been approached in two steps :
- The bottom level acceleration structure, built over the geometry (triangles) of a mesh
- The top level acceleration structure, built over instances of bottom level structures placed in the scene
As such, the bottom level takes the triangles to organize, and the top level reuses the resulting bottom level trees to build the final hierarchy. The advantage is that the top level hierarchy can reference the same bottom level structure many times, at different spots in the world.
In nkGraphics however, a lot of those concepts are abstracted. Let's see how the acceleration structure can be generated :
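The following is a hedged sketch : per the explanation below, the flag lives on the render queue, but its setter name here is an assumption.

```cpp
// Hypothetical sketch : the setter's name is an assumption.
// Marking the queue tells nkGraphics to build and maintain the
// acceleration structures for the meshes it holds.
renderQueue->setUsedForRaytracing(true);
```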
And this is about it. Now, each time the render queue is updated with a mesh, the associated acceleration structures will be generated or updated. Note that this does not prevent the queue from being used normally in other rasterization steps : the aim is to allow the same render queue to be used for hybrid rendering (mixing rasterization and raytracing effects).
This flag is off by default, as it triggers the generation of data you would not want in the general use case. If a render queue has to be used for raytracing, the flag needs to be turned on manually.
Now that we have a good idea of how the structure can be generated, what is left is to know what to do when it is traversed and a geometry is hit. For that, graphics APIs introduced new shader concepts, along with what are called shader tables.
The idea is that when traversing a scene, all shader information has to be available to know what a ray should do when hitting a given triangle. As such, this information needs to be compiled in one big table the traversal code knows how to index, to get what program to use, with what resources.
Once more, nkGraphics tries to make it as easy as possible, and the only thing you need to worry about is setting an entity's raytracing hit shader :
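As a hedged sketch (the setter names are assumptions) :

```cpp
// Hypothetical sketch : setter names are assumptions.
// The entity keeps its usual shader for rasterization,
// and additionally receives the hit shader invoked when a ray hits it.
entity->setShader(rasterShader);
entity->setRaytracingHitShader(hitShader);
```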
The shader needed will be a hit shader, as documented later. It can be set next to the standard shader you would use for your rasterization needs. Based on this data and the way the render queue is set up, nkGraphics will resolve everything and prepare the pipeline for you.
A last concept to have in mind within nkGraphics is the RaytracingPass. Triggering raytracing is done the same way as any other drawing step : by feeding a pass to the Compositor.
Let's have a peek into it, to understand how data fits in there :
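A hedged sketch of the data the pass receives (setter names are assumptions) :

```cpp
// Hypothetical sketch : setter names are assumptions.
rtPass->setRenderQueue(renderQueue);   // The queue to trace, with its acceleration structures built
rtPass->setShader(raygenMissShader);   // The shader exposing the raygen and miss entry points
```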
The pass takes the render queue to trace, and a shader covering raygen and miss. The shader specified should have a program exposing two entry points :
- A ray generation entry point, responsible for launching the rays
- A miss entry point, invoked when a ray hits nothing
Raytracing shaders have been separated into different categories within graphics APIs, depending on what they seek to do :
- Ray generation shaders, generating and launching the rays
- Miss shaders, invoked when a ray hits no geometry
- Hit shaders (closest hit, any hit), invoked when a ray hits geometry
- Intersection shaders, used to test rays against custom, non-triangle geometry
With that in mind and what has been presented just before, a quick refresher :
- The raygen and miss shaders are fed to the RaytracingPass, through its shader
- The hit shaders are set on the entities, next to their rasterization shader
As of now within nkGraphics, the only program type available for this is raytracing, independently of whether it holds raygen, miss, or hit shaders. In fact, all of those bits can be mixed in one program, which can then be fed to the pipeline.
Let's first create our raygen and miss program :
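The listing below is a hedged sketch of such a program : the constant buffer layout, register slots and output indexing are assumptions, while the [shader(...)] attributes, the raygen / miss function names and the RayPayload structure follow what is described next.

```hlsl
// Sketch of a raygen + miss program. Buffer layout and register slots are assumptions.
cbuffer CameraBuffer : register(b0)
{
    float4x4 inverseViewProj; // Used to unproject pixels back to world space
}

// Payload carried by each ray, updated as it traverses the scene
struct RayPayload
{
    float4 color;
};

// Acceleration structure, bound like a texture resource
RaytracingAccelerationStructure sceneAS : register(t0);

// UAV buffer receiving the final color, one entry per pixel
RWStructuredBuffer<float4> outputBuffer : register(u0);

[shader("raygeneration")]
void raygen()
{
    // Link this ray to a pixel of the target image
    uint2 launchIndex = DispatchRaysIndex().xy;
    uint2 launchDim = DispatchRaysDimensions().xy;

    // Pixel center in normalized device coordinates
    float2 ndc = ((launchIndex + 0.5f) / launchDim) * 2.f - 1.f;
    ndc.y = -ndc.y;

    // Inverse-project near and far points to reconstruct the ray in world space
    float4 nearPoint = mul(float4(ndc, 0.f, 1.f), inverseViewProj);
    float4 farPoint = mul(float4(ndc, 1.f, 1.f), inverseViewProj);
    nearPoint /= nearPoint.w;
    farPoint /= farPoint.w;

    // Describe the ray to launch
    RayDesc ray;
    ray.Origin = nearPoint.xyz;
    ray.Direction = normalize(farPoint.xyz - nearPoint.xyz);
    ray.TMin = 0.01f;
    ray.TMax = 10000.f;

    // Starting payload, updated by the miss / hit functions during traversal
    RayPayload payload;
    payload.color = float4(0.f, 0.f, 0.f, 1.f);

    TraceRay(sceneAS, RAY_FLAG_NONE, 0xFF, 0, 0, 0, ray, payload);

    // Write the resulting color to the UAV buffer
    outputBuffer[launchIndex.y * launchDim.x + launchIndex.x] = payload.color;
}

[shader("miss")]
void miss(inout RayPayload payload)
{
    // Nothing was hit : settle for a dark background color
    payload.color = float4(0.05f, 0.05f, 0.05f, 1.f);
}
```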
There are some aspects to take into account here. First, DirectX requires the functions to be tagged with their type. That's why we get our [shader("raygeneration")] and [shader("miss")] attributes, depending on what each function seeks to do. Then, nkGraphics needs you to name the functions according to their role : raygen for the ray generation function, and miss for the miss function. This is how it can find them back when linking everything together.
Next, while we recognize our constant buffer declaration, we can also notice something new : the RayPayload structure. This "payload" is an important concept within DXR, as it represents the data associated with one ray. As the ray travels through the scene and hits materials, the payload gets updated by the different functions the ray goes through. As such, we will use it to carry the color the ray should take.
Going forward, a new resource type is linked to the texture slot : the RaytracingAccelerationStructure. This corresponds to the acceleration structure we get from the render queue, once it is built. It is linked to programs like any other resource, and is a required parameter when launching rays from within programs. The last resource linked is the final UAV buffer we will use to store our raytracing result.
The first function declared is the raygen one. Its role is to generate the rays we will use to traverse the scene. Here, we use a ray's index in the launch grid to link it to a pixel in the target image, and reconstruct its position in the world by inverse-projecting it to world space. What's left is to prepare the ray's description and its starting payload, launch it into the scene, and write the updated color to the UAV buffer. For more information on the HLSL functions used to launch rays, a good start would be here.
Looking at the miss function, you can see how the scene traversal interacts with a ray's payload. It updates the payload's color, and calls it a day. The result is that when coming back from the TraceRay function, the raygen will have an updated payload after it went through the miss function (see how it is an inout parameter). Our miss function will be called whenever a ray hits no geometry... Which means we also need something for when geometry is hit !
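Below is a hedged sketch of the hit program : only the [shader("closesthit")] attribute, the closestHit function name and the payload layout matter here, the returned color being an arbitrary choice.

```hlsl
// Sketch of the hit program, kept as a separate source.
// The payload declaration has to match the one used by the raygen / miss program.
struct RayPayload
{
    float4 color;
};

[shader("closesthit")]
void closestHit(inout RayPayload payload, in BuiltInTriangleIntersectionAttributes attribs)
{
    // Geometry was hit : mark the ray with a light color
    payload.color = float4(1.f, 1.f, 1.f, 1.f);
}
```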
This is declared as a separate program, as it will be used in different contexts compared to the raygen and miss program we just created. We define the payload again, which needs to match the one from the other program, and declare a [shader("closesthit")] function named "closestHit". Remember, names are important for nkGraphics, and this one is no exception.
As a final touch, we alter the payload like we did in the miss function, and let the ray return.
Now that we have the programs, let's see how we can feed them to a Shader.
First with the raygen miss shader :
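The following is a hedged sketch : the method names are assumptions mirroring the usual resource feeding, and cameraBuffer / outputBuffer stand for the constant buffer and UAV buffer created elsewhere.

```cpp
// Hypothetical sketch : method names are assumptions.
raygenMissShader->setProgram(raygenMissProgram);                        // Program with the raygen and miss entry points
raygenMissShader->addConstantBuffer(cameraBuffer);                      // Inverse view-projection data
raygenMissShader->addTexture(renderQueue->getAccelerationStructure()); // The acceleration structure, fed as a texture
raygenMissShader->addUavBuffer(outputBuffer);                           // UAV buffer receiving the raytraced colors
```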
There is nothing really fancy compared to what we usually do when feeding data to programs. However, one part stands out : the one adding the acceleration structure from the render queue as a texture. This is basically how you can feed acceleration structures to programs, giving them access to a raytraced render queue.
Let's now dig into the hit shader :
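Another hedged sketch, with an assumed method name :

```cpp
// Hypothetical sketch : the hit shader only needs its program, no resources.
hitShader->setProgram(hitProgram);
```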
Which is quite straightforward as the program requires no resources.
Now that we have all the building blocks, we can proceed into creating the compositor :
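The following is a hedged sketch of the chain : the pass creation calls and method names are assumptions, rtPass being the RaytracingPass prepared earlier.

```cpp
// Hypothetical sketch : pass and method names are assumptions.
nkGraphics::CompositorNode* node = compositor->addNode();

node->addClearTargetsPass();                     // 1. Clear the target
node->addPass(rtPass);                           // 2. The RaytracingPass prepared earlier
rtPass->setDimensions(800, 600);                 //    One ray per pixel on the 800x600 target
node->addBufferToTargetCopyPass(outputBuffer);   // 3. Copy the raytraced buffer into the target
```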
The compositor will be quite simple : first clear the target, then raytrace the scene, and finally copy the resulting buffer back into the target. The most interesting bit might be the RaytracingPass : like a compute shader, it needs its dimensions to be communicated. Here, as we aim to cast one ray per pixel and render in 800x600, we set the dimensions accordingly.
Now that this is done, we can launch the program once the compositor is attached to the rendering context and get :
While not as impressive as what we could expect from raytracing, this image demonstrates the whole pipeline working together. The light part is the sphere we have, for which the hit shader is invoked. The dark part all around comes from the miss shader being invoked. Raytracing at its best !
In this tutorial, we saw how to use raytracing within nkGraphics :
- Checking for hardware support through the Renderer
- Preparing a render queue so that its acceleration structures get built
- Writing the raygen, miss, and hit programs, and feeding them to Shaders
- Setting up the RaytracingPass within a Compositor to trace the scene
While the result is not very impressive yet, we will push raytracing a bit further in the next tutorial. When ready, be sure to check it out !
As a final note, if raytracing is something you are interested in, a very good book to recommend is Ray Tracing Gems. It might go into more detail about DXR itself than you need, but interesting techniques are discussed afterwards. And this concludes the tutorial !